Search Results: "Matthew Palmer"

30 October 2013

Matthew Palmer: How to deal with the "package { gcc: }" problem in Puppet

In a comment to yesterday's post on why your Puppet module sucks, Warren asks what can be done about the problem of multiple modules needing to include the same package (gcc being the most common example I've come across). As I stated previously, there is no sane, standardised way for multiple independent modules to cooperate to ensure that certain packages are installed for their cooperative use. If it wasn't already obvious, I consider this to be a significant failing in Puppet, and one which essentially renders any attempt at public, wide-scale reuse of Puppet modules impossible. Software packages are such a fundamental part of system management today that it is rare for a module not to want to interact with them in some way. Without strong leadership from Puppet Labs, or someone core to the project who's willing to be very noisy and obnoxious on the subject, the issue is fundamentally unsolvable, because it requires either deep (and non-backwards-compatible) changes to how Puppet works at a core level, or it needs cooperation from everyone who writes Puppet modules for public consumption (to use a single, coordinated approach to common packages). The approaches that I've seen people advocate, or that I've considered myself, roughly fall into the following camps.

Put your package into a class in your module: Absofrickinglutely useless, because the elementary package resources you end up creating will still conflict with other modules' package resources. Anyone who suggests this can be permanently ignored, as they don't understand the problem being solved.

Use a globally common module of classes representing common packages: This could work, if you could get everyone who writes modules to use it (see "someone willing to be very noisy and obnoxious" above). Practically speaking, anyone who suggests this doesn't understand human nature. I don't see this actually happening any time soon.

Use the defined() function everywhere: By wrapping all your package resources in if !defined(Package["foo"]) blocks, you can at least stop your module from causing explosions. What it doesn't do is make sure that the various definitions of the package resource are congruent (imagine if one was package { "gcc": ensure => absent }). In order to safely avoid explosion, everyone would have to follow this approach, which (in the worst case) reduces the problem to the globally common module of classes. However, it's the least-worst approach I can practically consider. At least your module won't be the cause of explosions, and realistically no module's going to ask for gcc to be removed (or, if they are, you'll quickly find them and kill them). I've seen calls from Puppet core dev team members to deprecate and remove the defined() function entirely. Given the complete lack of alternatives, this is a disturbing illustration of just how out of touch they are with the unfortunate realities of practical Puppetry.
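
To make that concrete, here is a minimal sketch of the defined() guard being described (the class and package names are just examples):

class mymodule::compiler {
  # Only declare gcc if nothing else in the catalogue has done so already.
  # This avoids the duplicate declaration error, but does nothing to
  # guarantee that the existing declaration is congruent with this one.
  if !defined(Package["gcc"]) {
    package { "gcc":
      ensure => installed,
    }
  }
}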

Use virtual resources: This is yet another variant of the common class technique that would require everyone, everywhere, to dance to the same tune. It has all the same problems, and hence gets a big thumbs-down from me.
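
For reference, the virtual resource approach looks roughly like this (the module names are invented); it only helps if every module agrees to declare and realise the package through the same shared class:

# In some agreed-upon shared module:
class shared::packages {
  @package { "gcc": ensure => installed }
}

# In every module that wants a compiler:
class mymodule {
  include shared::packages
  realize(Package["gcc"])
}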

Use singleton_packages: This module is a valiant attempt to work around the problem, but practically speaking it is no better than using defined(), because again, if a module that isn't using singleton_packages specifies package { "gcc": ensure => absent }, you're going to end up with assplosion. The mandatory dependency on hiera doesn't win me over, either (I'm a hiera-skeptic, for reasons which I might go into some other time).

Use only modules from a single source: This is the solution that I use myself, if that's any recommendation. As a result of pretty much every publicly-available Puppet module sucking for one reason or another, I do not, currently, have any externally-written Puppet modules in the trees that I use. Every module (158, at current count) has been written by someone in the sysadmin team at $DAYJOB, to our common standards. This means that I can coordinate use of package resources, using our module of common classes where appropriate, or refactor modules to separate concerns appropriately. If you're wondering why we don't have all of these 158 modules available publicly, well, we have a few of them up, but yes, the vast majority of them aren't publicly available. Some of them suck mightily, while many others are just too intertwined with other modules to be used on their own, and we don't want to release crap code; there's far too much of that out there already.

29 October 2013

Matthew Palmer: Why Your Puppet Module Sucks

I use Puppet a lot at work, and I use a lot of modules written by other people, as well as writing quite a number of my own. Here's a brief list of reasons why I might say that your module sucks.

1. You use global variables: This would have to be the most common idiom that just makes my teeth grind. Defined types exist for a bloody reason. Global variables make it incredibly difficult to reason about what is going to happen when I use a particular global variable (see also "You don't write documentation"), and I get no feedback if I typo a global variable name. Add in a healthy heaping of "lack of namespacing", and you've basically guaranteed that your module will be loudly cursed.
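
As a hedged illustration of the alternative, here is what passing the same information through a defined type might look like; everything here (the type name, parameters, and file path) is invented for the example:

# Instead of reading some global variable from who-knows-where, take the
# values as explicit, documented parameters.
define mymodule::vhost($docroot, $port = 80) {
  file { "/etc/apache2/sites-available/${name}":
    content => "DocumentRoot ${docroot}\nListen ${port}\n",
  }
}

# The caller makes the values explicit; a typo in a parameter name is a
# compile error rather than a silently-undefined variable.
mymodule::vhost { "example.com":
  docroot => "/srv/www/example.com",
  port    => 8080,
}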

2. You use parameterised classes: I've never managed to work out why parameterised classes even exist. They've got all the problems of regular classes, as well as all the problems of types. There was a fantastic opportunity with parameterised classes to fix some of the really annoying things about regular resources, such as doing conflict checking on parameters and only barfing if there was an actual conflict. But no: if you declare the same class twice, even with identical parameters, Puppet will smack you on the hand and take away your biscuit. FFFFFFUUUUUUUUUUUU-
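
A quick illustration of the complaint (class and parameter names invented): two resource-style declarations of the same parameterised class, even with identical parameters, give you a duplicate declaration error instead of being merged.

class compiler($suite = "gcc") {
  package { $suite: ensure => installed }
}

# First declaration: fine.
class { "compiler": suite => "gcc" }

# Second declaration, identical parameters: Puppet still refuses, with a
# duplicate declaration error for Class[Compiler].
class { "compiler": suite => "gcc" }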

3. You fail at using fail(): Modules are supposed to be reusable. That's their whole reason for existence. One of the benefits of Puppet is that you can provide an OS-independent interface for configuring something. However, unless you're some absolute God of cross-platform compatibility, you will only be writing your module to support those OSes or distros that you, personally, care about. That's cool; if you don't know anything about OS X, you probably shouldn't be guessing at how to support something on that platform anyway. However, when you do have some sort of platform-specific code, for the love of Pete, have a default (or else) clause that calls fail() with a useful and descriptive error message, to indicate clearly that you haven't included support for whatever environment the module's being used in. Failing to do this can cause some really spectacular explosions, because the rest of your module assumes that certain things have been done in the platform-specific code, and when it hasn't... hoo boy.
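
A minimal sketch of the kind of guard being asked for (the class, fact, and package names are only examples):

class mymodule::install {
  case $::osfamily {
    "Debian": {
      package { "apache2": ensure => installed }
    }
    "RedHat": {
      package { "httpd": ensure => installed }
    }
    default: {
      # Bail out loudly here, instead of silently doing nothing and letting
      # the rest of the module explode later.
      fail("mymodule does not support osfamily '${::osfamily}'")
    }
  }
}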

4. You don't write documentation: Yes, it isn't easy to write good documentation. I know that. But without documentation, your module is going to be practically unusable. So if you don't have docs, your module is basically useless. Well done, you.

5. You have undeclared dependencies: This also includes declaring your dependencies in a non-machine-parseable manner; I'm not going to grovel through your README for the list of other modules I might need; I have machines to do that kind of thing for me. If I try to use your module, and it craps out trying to reference some sort of type or class that doesn't appear at all related to your module, I will call into question your ancestry, and die a little more inside.
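
For illustration, this is the sort of machine-parseable declaration being asked for, in the Modulefile format that the Puppet module tooling of the era understood (the module names and version constraints are made up):

name    'examplecorp-mymodule'
version '0.1.0'

# Declared here, dependencies can be resolved by tooling instead of being
# buried somewhere in a README.
dependency 'puppetlabs/stdlib', '>= 2.0.0'
dependency 'puppetlabs/apt',    '>= 1.1.0'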

6. You use common packages without even trying to avoid the pitfalls: OK, to be fair this is, ultimately, Puppet's grand fuckup, not yours, but you at least need to pretend to care. There is no sane, standardised way in Puppet for multiple modules to install the same package. Let's say that a module I write needs to have a compiler. So I package { "gcc": }. Then someone else's module also wants a compiler, so it also package { "gcc": }. "FWAKOOM!" says Puppet. "What the fuck?" says the poor sysadmin, who just wanted both a virtualenv and an RVM environment on the one machine. Basically, using packages is going to make your module suck. Does that mean that this makes wide distribution of a large repository of modules written by different people nigh-on impossible? Yes. Fantastic.
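
To make the failure mode concrete, a sketch of the collision (module and node names invented for the example):

# modules/python_env/manifests/init.pp
class python_env {
  package { "gcc": ensure => installed }
}

# modules/rvm_env/manifests/init.pp
class rvm_env {
  package { "gcc": ensure => installed }
}

# Pull both into one node, and catalogue compilation fails with a duplicate
# declaration error for Package[gcc], even though both modules want exactly
# the same thing.
node "build.example.com" {
  include python_env
  include rvm_env
}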

13 October 2013

Matthew Palmer: RACK_ENV: It's not *for* you

(With apologies to Penny Arcade.) I'm probably the last person on earth to realise this, but the RACK_ENV environment variable (and the -E option to rackup) isn't intended for consumption by anything other than Rack itself. If you want to indicate what sort of environment your application should run in, you're going to have to do it via some other means. Why is this? Because the interpretation that Rack applies to the value of RACK_ENV that you set makes no sense whatsoever outside of Rack. Valid options are "development", "deployment", and "none". If you follow the usual Rails convention of naming your environments "development", "test", and "production" (and maybe "staging" if you're feeling adventurous), then in any environment other than development, you're not going to be telling Rack anything it understands. As I said, I may be the last person on earth to have worked this out, but I doubt it. There are plenty of bug reports and WTF? blog posts and Stack Overflow questions that appear to stem from people misunderstanding the purpose of RACK_ENV. Sadly, the Rack documentation is very quiet on the whole topic, and the only place that mentions how the environment is interpreted is in the comments for Rack::Server.new, and that doesn't tie the environment to the -E option or the RACK_ENV environment variable. At any rate, the take-away is simple: unless you want Rack to pre-configure a bundle of middleware for you, RACK_ENV or rackup -E is not the configuration variable you're looking for. Use something else to tell your app how it's supposed to work.
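
A hedged sketch of the "use something else" suggestion: read your own variable (APP_ENV here is just an invented name, not something Rack knows about) and leave RACK_ENV to Rack. The application class is also made up for the example.

# config.ru
require_relative "app"

# Our own knob for application behaviour; Rack never looks at this.
environment = ENV.fetch("APP_ENV", "development")

# RACK_ENV / rackup -E is left alone, so Rack's own interpretation of it
# ("development", "deployment", "none") stays internally consistent.
run MyApp.new(environment: environment)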

10 October 2013

Matthew Palmer: How *not* to respond to interview feedback

Seen at $DAYJOB, from a candidate we rejected due to a pretty poor showing in a phone interview:
May I suggest that your company puts in place a more effective way of interviewing potential candidates. Please pass this to comment to your senior management so they may look at improving this process.
I'm not sure I've ever seen such a clear confirmation that our interviewing process is working as intended. While we hire for smart people who don't meekly accept the status quo, we'd prefer it if our staff didn't mouth off when things don't go their way. Customers tend to frown on such outbursts.

2 October 2013

Matthew Palmer: A vim-outliner cheatsheet

Being a bit of a vim-outliner fanatic, but having an inability to remember all of the various key combos, is a sad place to be. Having lived there for far too long, I finally got up the energy to learn enough Inkscape to produce my very own vim-outliner cheatsheet. One more thing I can ,,x

17 June 2013

Matthew Palmer: Thought for the day

When the Syrian Electronic Army hacked The Onion's Twitter account, what did they do to cause panic and mayhem? Post real news stories?

14 May 2013

Matthew Palmer: A Modest Vocabulary Proposal

I would like to suggest that the word "unprofessional" be struck from the dictionary, and anyone who uses it struck with a dictionary. It is a word which conveys no useful information or proposal for action, and is thus nothing but meaningless noise. The purpose of communication is to adjust another person's process of cognition. I've heard it said that "all communication is persuasion", which is quite true: you're trying to persuade someone to change what they think. We can consider the intention and effectiveness of an attempt to communicate in this light. What is someone trying to achieve when they label a person or behaviour "unprofessional"? If we're being charitable, we would probably say that they're trying to highlight that something is bad, or could be better. However, just stamping our foot and saying "bad!" isn't enough; it's also important to provide some information that the recipient can act upon. The problem with the word "unprofessional" is that it really isn't specific enough on the subject of what is wrong. Have you ever had someone say something like, "your behaviour yesterday was really unprofessional"? They're assuming you know what they're talking about, and you might well have a reasonable guess, but what if you guess wrong? Should you never do anything you did yesterday, just in case that particular thing was unprofessional? When I've caught myself thinking "that was unprofessional" of my own behaviour, or someone else's, I think about what caused me to think that. Once I drill down into it, I usually come to the conclusion that what I really meant was, "I don't like that". Since I'm not paid to like things, that's pretty much irrelevant as a reason to tell someone not to do something. On the occasions when I come up with something more concrete, it is invariably a more useful expression than "unprofessional". Things like "it frustrates the customer", or "it pisses off the person sitting in the next cube", are a much better expression of why something is bad than "unprofessional". I'd encourage everyone to keep a careful watch over themselves and those around them for use of the word. When you catch yourself saying it (or thinking it), examine your motives more closely. Whatever the more specific adjective is, use that instead. If it just comes down to "I don't like that", at the very least say that to the person you're talking to. Don't try and hang anything grandiose on your personal prejudices. You might come off as being petty, but at least you'll be honest.

15 April 2013

Matthew Palmer: splunkd, Y U NO FOREGROUND?!?

I am led to believe that splunkd (some agent for feeding log entries into the Grand Log Analysis Tool Of Our Age) has no capability for running itself in the foreground. This is stupid. Do not make these sorts of assumptions about how the user will want to run your software. Some people use sane service-management systems that are capable of handling the daemonisation for you, and that automatically restart the managed process on crash. These systems are typically much easier to configure and debug, and they don't need bloody PID files, and the arguments about where to put them (tmpfs, inside or outside chroots, oh my), and who should update them, and how to reliably detect that they're out of date when the process crashes without causing race conditions, and whether non-root-running processes should place their PID files in the same place, and how you deal with the permissions issues... and bugger that for a game of skittles. In short, if you provide a service daemon and do not provide some well-documented means of saying "don't background", I will hurt you. This goes double if your shitware is not open source.

11 April 2013

Matthew Palmer: RSpec the easy way

Anyone that has a fondness for good ol' RSpec knows that there's a fair number of matchers and predicates and whatnot involved. Life isn't helped by the recent (as of 2.11) decision to switch to using expect everywhere instead of should (apparently should will be going away entirely at some point in the future). There is a good-looking RSpec cheatsheet out on the net, but it dates from 2006, and things have changed since then. We're using RSpec at work a lot at the moment, though, so our tech writer kindly updated it for the new-style syntax, gave it a nice blue tint, and put it out there for the world at large to use. Here is our updated RSpec cheatsheet for anyone who is interested. I can tell you for certain that a double-sided, laminated version of this sucker looks very nice, and is a handy addition to the desk-of-many-things.
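
For anyone who hasn't made the switch yet, the change looks roughly like this (the spec content is invented purely to show the syntax):

# old_and_new_spec.rb
require "rspec"

describe "Widget sizing" do
  it "reports its size" do
    size = 42

    # Old-style syntax, on its way out:
    size.should == 42
    size.should be > 40

    # New-style (RSpec 2.11+) expect syntax:
    expect(size).to eq(42)
    expect(size).to be > 40
  end
end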

22 March 2013

Matthew Palmer: Managing your Databases

The Standalone Sysadmin asks, "How do you manage database server instances?":
Do you have one (or a few) centralized database servers, either standalone or clustered, or do you spread the load like we are currently?
His argument for centralisation is one of easing the management burden of configuration and backups, whereas the distributed approach eliminates a central point of failure and performance degradation. I go for distributed, all the way. For a start, we run so many databases for so many customers that there's no way on earth we could stand up a small number of database servers and handle all the load (hell, we've got single customers who consume a cluster of machines with 384GB of RAM and all the SSDs you can eat). Security and permissions is a whole other kettle of fish; the contortions we'd have to do to allow customers the level of management they need with a centralised database system would be immense. Then there's the need of some customers for MySQL, some for PgSQL, different performance tuning for different workloads... nope, centralised DBs don't work for us. Given this, we've bitten the bullet and solved pretty much all of the management problems. Installation and configuration is all handled via Puppet, and backups are trivial: the same system that installs the DB server itself also drops a hook script that the backup agent uses to know that it has to dump a database server. Monitoring that this backup is taking place successfully is also automatically provisioned, so we know that we're not missing anything. Ultimately, this same approach applies to practically anything that you're tossing up between centralised and distributed. At scale, you can never rely on centralisation, so you may as well bite the bullet and learn how to do it distributed pretty much from the start. That saves some serious system shock when you discover what your hardware vendor wants for the next step up in big-iron hardware.

25 February 2013

Matthew Palmer: libvirt errors that are not helpful

Since there is absolutely zero Google juice on this problem, here are some hints in case someone else is out there beating their head on their keyboard in frustration. The problem: when trying to define a storage pool (or run 90+% of other virsh commands), you get this sort of result:
# virsh pool-define /tmp/pooldef
error: Failed to define pool from /tmp/pooldef
error: this function is not supported by the connection driver: virStoragePoolDefineXML
Or this:
# virsh pool-create /tmp/pooldef
error: Failed to create pool from /tmp/pooldef
error: this function is not supported by the connection driver: virStoragePoolCreateXML
Not helpful at all. The problem is (or, at least, it was for me) that I have both KVM and virtualbox installed (I prefer KVM, but vagrant uses virtualbox and I'm playing around with it). It would appear that libvirt prefers to use virtualbox over KVM, which is stupid, because virtualbox doesn't appear to be fully supported (as evidenced by the extensive set of functions that are not supported by the virtualbox connection driver). The solution: edit /etc/libvirt/libvirt.conf, and ensure that the following line is defined:
uri_default = "qemu:///system"
This will tell libvirt to use KVM (via qemu) rather than virtualbox, and you can play with pools to your heart's content.
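
If you want to confirm which driver you are getting before retrying, virsh can print the connection URI it resolves to; with the setting above in place (and qemu/KVM installed), you would expect to see something like:

# virsh uri
qemu:///system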

22 January 2013

Matthew Palmer: When is a guess not a guess?

when it's a "prediction". In the 4th January edition of the Guardian Weekly, the front-page story, entitled "Meet the world's new boomers", contained this little gem:
Back in 2006, [PricewaterhouseCoopers] made some forecasts about what the global economy might look like in 2050, and it has now updated the predictions in the light of the financial crisis and its aftermath.
Delightful. They made some forecasts about what the global economy might look like. Given that they didn't include any impact of the GFC in those forecasts, it clearly wasn't a particularly accurate forecast. Y'know what an inaccurate prediction is called? Guesswork. Let's call a spade a spade here. I see this all the time, and it's starting to shit me. People making predictions and forecasts and projections hither and yon, and they're almost always complete bollocks, and they never get called on it. I read the Greater Fool blog now and then, and that blog is chock full of examples of people making predictions which have very little chance of being in any way accurate. While Dr Ben Goldacre and others are making inroads into requiring full disclosure in clinical trials, I'm not aware of anyone taking a similar stand against charlatans making dodgy-as-hell predictions over and over again, with the sole purpose of getting attention, and without any responsibility for the accuracy of those predictions. Is anyone aware of anyone doing work in this area, or do I need to register badpredictions.net and start calling out dodginess?

3 January 2013

Matthew Palmer: Everything's Better With Cats

The ancient Egyptians were a pretty cool bunch, but their worship of cats really added something to their civilisation (double bonus: their word for cat was "mau"). The Internet itself, while undeniably a fantastic resource, reached new heights with the introduction of LOLCats. If you are cat-poor, you can swap your shabby tat for a tabby cat, while if you've gone a bit overboard you can sell your excess cats to cat converters. However, cats have found minimal employment in systems administration. Until now. As the day job have been early adopters of btrfs, everyone at work has been very interested in the reported hash DoS of btrfs. It has been a topic of considerable discussion around the office. However, it can be a tough topic to explain to people less well versed in the arcana of computer science. Not to be deterred, Barney, our tech writer, took the standard explanation, added some cats, and came up with an explanation of the btrfs hash DoS that your parents can understand. The density of cat-related puns is impressive. (Incidentally, if you don't need cats to understand btrfs hash DoS attacks, and live in the Sydney area, you might be interested in working for Anchor as a sysadmin.)

26 October 2012

Matthew Palmer: The e-mail PDA

I've been a wannabe GTD aficionado for some years. I've wanted to do it, but managing lists has always been something that has too much friction, overhead, or whatever. Finally, though, I think I might have found a way to manage lists that works. My use-case isn't unique, although I will concede I'm perhaps being more dogmatic than most. My previous attempt was a tool I called tagnote: it was a vim-outliner file full of hierarchically organised outliner entries, with tags inlined. It was a neat idea, but it wasn't smooth to add/browse/delete items, and didn't work with my phone at all (trying to use vim for any length of time on a bottom-of-the-range Android phone would kill me). The current iteration, as the title of this post suggests, is a list manager that entirely uses e-mail. It really is a perfect symbiosis. So what have I got, exactly? It's fairly straightforward. The part I'm really happy I achieved is the tickler. I've always been a fan of "hide it until you need it", but my previous system didn't let me do that. Now, though, I have a separate list called tickler, and all the items in there have an X-Tickle header, which specifies the date I want to see them. Each night a cronjob runs through the tickler and moves anything for today into the INBOX. An X-Tickle-Repeat header lets me have things that repeat over and over again. So in short, using entirely open-source tools and a couple of hours of my time doing things I enjoy anyway (shell scripts! woo!), I've now got a list manager that doesn't get in my way more than it absolutely has to. We'll see how long I last this time before I feel the urge to improve my lists again.
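
As a rough sketch of the nightly tickler job described above: the original was a shell script, but the logic might look something like this in Ruby. The Maildir paths are assumptions, and X-Tickle-Repeat handling is omitted.

#!/usr/bin/env ruby
require "date"
require "fileutils"

TICKLER = File.expand_path("~/Maildir/.tickler/cur")   # assumed folder layout
INBOX   = File.expand_path("~/Maildir/new")            # assumed folder layout

Dir.glob(File.join(TICKLER, "*")).each do |msg|
  due = nil
  File.foreach(msg) do |line|
    break if line.strip.empty?                          # end of the header block
    due = (Date.parse($1) rescue nil) if line =~ /^X-Tickle:\s*(\S+)/i
  end
  # Anything due today (or overdue) gets moved back into the inbox.
  FileUtils.mv(msg, INBOX) if due && due <= Date.today
end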

22 July 2012

Matthew Palmer: Podcasting protip

Don't spend the first two minutes of the first episode of your podcast telling everyone what a micropodcast is, and how iTunes only lets you have 20MB per episode. The only exception to this might be if you were making a podcast about podcasting. Which I wasn't. That is all.

19 July 2012

Matthew Palmer: Melt your cores with Rake's multitask

Reading documentation does pay off. Browsing through the Rakefile format documentation for Rake just now, I found mention of the multitask method, which declares that all of that task's prerequisites can be executed in parallel. A comparison run:
$ rake clean; time rake build
real    0m7.116s
user    0m6.788s
sys     0m0.260s
$ rake clean; time rake multibuild
real    0m3.820s
user    0m8.809s
sys     0m0.288s
This is a trivially small build I'm doing, I must admit, but halving the build time (in this case at least) pays huge dividends in my perceived productivity. It really blows the dust out of my CPU cores, too, which tend to be woefully underutilised (this being a quad-core laptop and all). So I say unto you all: go forth and multitask!
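
For reference, a minimal sketch of what the two task definitions compared above might look like (the target names and the sleep stand-in are invented for the example):

# Rakefile
task :default => :build

# Prerequisites run one after another.
task :build => [:part_a, :part_b, :part_c]

# Same prerequisites, but Rake may run them in parallel threads.
multitask :multibuild => [:part_a, :part_b, :part_c]

[:part_a, :part_b, :part_c].each do |t|
  task t do
    sh "sleep 1"   # stand-in for a real compilation step
  end
end

task :clean do
  rm_rf "build"
end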

23 May 2012

Matthew Palmer: Unhealthy Obsessions

This morning, I nearly missed getting off at my train station to go to work because I was too engrossed in writing documentation for the company's leave accounting system. I wonder if there's a 12-step program for people like me. It can't be healthy.

13 May 2012

Matthew Palmer: vmdksync helps you escape from VMware

When I wrote lvmsync late last year, I didn't realise I was being typecast. Before too long, I realised that the logic that I'd implemented for lvmsync would also help me with a separate migration project I'd been dreading: getting the day job off VMware. Back in the early days of virtualisation, management made the decision to run VMware, for all the usual reasons ("commercially supported!", "industry standard!", and so on). Unsurprisingly (to me, anyway) it didn't take too long for management to realise that it wasn't the best choice for us. When you've got umpty-billion dollars to spend on hardware, software, and support, VMware might be the right option (although Amazon doesn't seem to think so). Anchor's company culture, on the other hand, is built around "smart staff, simple systems" over "dumb staff, smart vendors", because no vendor is ever going to care about our customers as much as we do. So VMware was never going to work for us. Unfortunately, as happens all too often, once VMware was in place, there was very little motivation to get rid of it and move those customers onto the chosen replacement (that we were deploying all new customers on). I happen to think this is a terrible attitude in general, one that makes life so much harder in the long term. I believe strongly in retrofitting old systems to keep them up-to-date with the current state of the art, and keeping technical debt under control. But I wasn't running the show back when we stopped putting new customers on VMware, so the few VMware servers we had stayed around far longer than they should have. Recently, though, bad things started to happen. The VMware servers were starting to fall apart. The Windows machine we had to keep around to use the VMware management console started crapping out, and when the choice was between doing unspeakable things to Windows and just ditching VMware... well, it wasn't much of a choice. The only remaining question was how to do the migration off VMware with the least amount of downtime for our customers. I was really quite surprised that nobody out in Internet land appeared to have come up with a simple, robust tool to do this. Sure, some vendors had all-singing, all-dancing toolkits that cost ridiculous amounts of money, required you to install their agent on the machines involved, and promised the earth, but it all smelt of snakeoil and bullshit. In true hacker style, then, I decided to write something myself. The model I came up with mirrors lvmsync's quite closely, because that one worked, and it turned out to be surprisingly easy to implement once I managed to reverse-engineer the file format (VMware has a PDF spec of a bunch of its file formats, but whoever wrote it was enough of an evil genius to make it utterly incomprehensible to anyone who doesn't already know the file format, whilst making perfect sense to anyone who already does). The result: vmdksync. It is nothing but 80-odd lines of Ruby whose sole purpose is to take a delta.vmdk file and write the changes stored in that file onto a copy of the flat.vmdk file, that copy being a file or block device you can create while the VM is still running (after you've made a snapshot, of course). It helped me provide a painless migration path away from VMware, and I'd be really pleased if it helped some other people do the same. Share and enjoy!

25 December 2011

Matthew Palmer: The Other Way...

Chris Siebenmann sez:
The profusion of network cables strung through doorways here demonstrates that two drops per sysadmin isn't anywhere near enough.
What I actually suspect it demonstrates is that Chris' company hasn't learnt about the magic that is VLANs. All of the reasons he cites in the longer, explanatory blog post could be solved with VLANs. The only time you can't get away with one gigabit drop per office and an 8-port VLAN-capable switch is when you need high capacity, and given how many companies struggle by with wifi, I'm going to guess that sustained gigabit-per-machine is not a common requirement. So, for Christmas, buy your colleagues a bunch of gigabit VLAN-capable switches, and you can avoid both the nightmare of not having enough network ports, and the more hideous tragedy of having to crawl around the roofspace and recable an entire office.
